SUPPORT OPERATIONS · AI ARCHITECTURE

OPERATIONAL DEBT
HAS A DESIGN SOLUTION.

67% New hire ramp reduction
95% Initial Response SLA attained
5 min Average Speed of Answer

Nearly a decade building systems that close the gap between how support operations work and how they should.

THE SERIES

Four systems. One compounding argument.

Each system was built under the same constraints — no CRM customization, no new fields, no formal program support. Each one solved a real problem. Each one produced the structured data the next system needed. The Case Management Dashboard is where they converge.

CS — 01 9mo → 3mo
CS — 02 35% → 95% SLA
CS — 03 3wk → same week
CAPSTONE Convergence Point
CASE STUDY — 01
67% RAMP TIME REDUCTION

Architecting the Foundation

THE PROBLEM
New hires took 9 months to reach consistent quarterly performance. Official process documentation was written for compliance, not floor execution — leaving reps to rely on memory and case-by-case interpretation.

  • Built a lifecycle-structured Confluence hub that bridged policy documentation to floor-level execution — organized around the stages of a case, not by topic or role
  • Designed a GenAI-assisted First Call Summary Email form that embedded defect documentation literacy into daily casework before formal training arrived
  • Created a Defect Draft Form that accelerated the Associate-to-Analyst feedback loop and built the escalation muscle before promotion was on the table

“Embed skill-building into daily practice, not into training events. Muscle memory built during live casework is more durable than knowledge acquired in isolation.”

Full case study →
CASE STUDY — 02
95% INITIAL RESPONSE SLA ATTAINED

Eliminating Queue Ambiguity

THE PROBLEM
Initial Response SLA attainment was at 35%. Cases arrived into an uncoordinated queue — not because reps were neglecting it, but because watching the queue was no one’s explicit job in that moment.

  • Borrowed turn-order mechanics from RPG combat systems to model case load as a weighted point total — severity, escalation status, defect attachment, product complexity — not a flat case count
  • Fed two years of sanitized historical case data to GenAI to produce base point values and multipliers, then distilled the scoring into a self-sufficient offline HTML/JS calibration tool
  • Deployed a daily distribution sequence coordinated through Slack — one line, one tag, one notification — complexity in the model, simplicity in the execution
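The weighted-load idea above can be sketched in a few lines. This is an illustrative model only: the severity tiers, base points, and multipliers below are placeholders, not the GenAI-calibrated values the actual tool used.

```python
# Sketch of a weighted case-load model: load is a sum of per-case scores,
# not a flat case count. All weights here are assumed placeholder values.

BASE_POINTS = {"S1": 8, "S2": 5, "S3": 3, "S4": 1}  # by severity tier (assumed)

def case_weight(severity, escalated=False, has_defect=False, complexity=1.0):
    """Score one case from its positioning attributes."""
    points = BASE_POINTS[severity]
    if escalated:
        points *= 1.5        # escalation multiplier (placeholder)
    if has_defect:
        points += 2          # defect attachment bonus (placeholder)
    return points * complexity

def rep_load(cases):
    """A rep's load is the weighted total, used for turn-order distribution."""
    return sum(case_weight(**c) for c in cases)

# Two heavy cases can outweigh five light ones — the point of the model.
light = [{"severity": "S4"}] * 5
heavy = [{"severity": "S1", "escalated": True, "has_defect": True},
         {"severity": "S2", "complexity": 1.4}]
print(rep_load(light), rep_load(heavy))
```

With these placeholder weights, five routine cases score lower than two severe ones, which is exactly the distinction a flat count cannot make.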

“Borrow the right algorithm, not the familiar one. Simple case count was the obvious metric. It was also wrong.”

Full case study →
CASE STUDY — 03
SAME-WEEK QUARTERLY REVIEW DELIVERY

Closing the Feedback Loop

THE PROBLEM
The quarterly scorecard arrived with 300–400 records requiring isolation and exclusion review before a single Performance Review could be written. The first month of every new quarter was consumed closing out the previous one.

  • Reverse-engineered the official scorecard methodology to build four tightly scoped CRM Saved Searches that replicated the exact scoring logic — so weekly numbers were reliable predictors, not parallel metrics
  • Built a locally run AI-assisted review tool that processed four weekly CSV exports, handled exclusion review in increments of 5–10 records, and generated individualized PDF scorecards per rep
  • Shifted the rep’s relationship with performance data from passive quarterly verdict to active mid-quarter coaching — reps began seeking feedback before the quarter ended
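The batching step above is the part that broke the 300-record wall, and it can be sketched simply. Field names like `exclusion_flag` are assumptions for illustration, not the real export schema.

```python
# Sketch of the weekly flow: merge the CSV exports, then surface exclusion
# candidates in small batches instead of one quarter-end backlog.
# Column names ("case_id", "rep", "exclusion_flag") are assumed.
import csv, io

def load_records(csv_texts):
    """Merge several CSV export bodies into one list of row dicts."""
    records = []
    for text in csv_texts:
        records.extend(csv.DictReader(io.StringIO(text)))
    return records

def exclusion_batches(records, size=5):
    """Chunk flagged records into reviewable increments of `size`."""
    flagged = [r for r in records if r["exclusion_flag"] == "Y"]
    return [flagged[i:i + size] for i in range(0, len(flagged), size)]

weekly = ["case_id,rep,exclusion_flag\n1,A,Y\n2,A,N\n3,B,Y\n",
          "case_id,rep,exclusion_flag\n4,B,Y\n5,C,N\n"]
records = load_records(weekly)
batches = exclusion_batches(records, size=2)
```

Reviewing two or three small batches a week is the redistribution the pull-quote below describes: no single task gets faster, but none of them pile up at quarter end.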

“A review cycle is only as fast as its slowest task. The system did not make any individual task faster. It redistributed them so none were standing in line when the scorecard arrived.”

Full case study →
WHERE THE SERIES POINTS

The AI-Forward Case Management Dashboard

This is not a concept. It is a designed architecture with a provable lineage.

Three systems — built without CRM customization, without new fields, without formal program support — produced structured, machine-readable operational data that no single system could synthesize. The Dashboard is the convergence point: a RAG-powered AI coach embedded in the case workflow, an agentic evaluation layer replacing batch review, and continuous performance signals replacing the quarterly verdict.

Open full mockup →
AGENT 01

The Guide

The rep-facing AI coach embedded in the case workflow.

  • Queries three RAG pipelines — Support Process, Technical Documents, Use-Case KB — in real time during active investigation
  • Generates a sequenced Investigation Checklist and surfaces process Action Items the rep can immediately validate
  • Produces structured Defect and Enhancement drafts from Checklist and Journal data, formatted to spec, for rep review
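The Guide's fan-out pattern can be sketched as one question dispatched across the three pipelines and merged into a single ranked context. The passages and the overlap-based scorer below are placeholders standing in for real vector retrieval.

```python
# Minimal sketch of the Guide's retrieval fan-out: query all three
# pipelines, merge, rank, return top hits. Contents and the scoring
# function are illustrative assumptions, not the production retrievers.

PIPELINES = {
    "support_process": ["Confirm severity before first response.",
                        "Escalation requires a defect draft."],
    "technical_docs":  ["Governance limits apply per script type."],
    "use_case_kb":     ["Similar case: saved search timeout on large datasets."],
}

def score(question, passage):
    # Placeholder relevance: shared-word overlap instead of vector similarity.
    return len(set(question.lower().split()) & set(passage.lower().split()))

def retrieve(question, top_k=3):
    hits = [(score(question, p), name, p)
            for name, passages in PIPELINES.items()
            for p in passages]
    hits.sort(key=lambda h: h[0], reverse=True)
    return [(name, p) for s, name, p in hits[:top_k] if s > 0]

for source, passage in retrieve("when does escalation require a defect draft"):
    print(source, "->", passage)
```

The design point is that the rep asks once; pipeline selection and ranking happen behind the checklist, not in the rep's head.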
AGENT 02

The Coach

Continuous state-watching. Does not wait for triggers.

  • Observes case record for positioning changes — severity shifts, escalation flags, defect indicators, sentiment signals
  • Signals the Guide when state changes warrant a checklist update; writes directly when urgency is immediate
  • Catches Mean Time To Escalate misses before they happen — defect indicators accumulate gradually, and humans miss slow drift mid-case; the Coach does not have that blind spot
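The blind-spot claim can be made concrete with a small sketch: no single signal is strong enough to trigger a human escalation review, but the Coach's running total crosses a threshold. The signals, weights, and threshold are illustrative assumptions.

```python
# Sketch of continuous state-watching: the Coach folds each positioning
# change into a running evidence total and signals the Guide when the
# accumulation, not any single event, crosses a threshold.

class Coach:
    ESCALATE_AT = 1.0  # cumulative evidence threshold (placeholder)

    def __init__(self):
        self.evidence = 0.0

    def observe(self, signal, weight):
        """Fold one observed state change into the running total."""
        self.evidence += weight
        if self.evidence >= self.ESCALATE_AT:
            return f"signal Guide: escalation review ({signal} tipped it)"
        return None  # below threshold: keep watching

coach = Coach()
updates = [("reproducible on clean account", 0.4),
           ("second customer reports same behavior", 0.3),
           ("workaround stopped working", 0.4)]
alerts = [coach.observe(s, w) for s, w in updates]
# No single update reaches the threshold; the third pushes the total past it.
```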
AGENT 03

The Scribe

Trigger-driven evaluation at Case Save, End of Shift, and Case Close.

  • Pre-populates performance-relevant fields using the scorecard methodology reverse-engineered in Case Study 03
  • Flags exclusion candidates for manager confirmation — the agent detects, the manager decides
  • Feeds the performance dashboard with standing as current as the last trigger point, replacing the quarterly verdict with a continuous signal
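The Scribe's detect-don't-decide boundary can be sketched as a single evaluation pass. Trigger names, field names, and the exclusion rule below are assumptions for illustration; the real fields come from the reverse-engineered methodology.

```python
# Sketch of one Scribe pass: pre-populate scorecard fields at a trigger
# point, flag exclusion candidates, and leave confirmation to the manager.

TRIGGERS = {"case_save", "end_of_shift", "case_close"}

def scribe_pass(case, trigger):
    """Evaluate one case at a trigger point; never finalizes exclusions."""
    assert trigger in TRIGGERS
    return {
        "trigger": trigger,
        # Pre-populated from case data (60-minute SLA is a placeholder).
        "first_response_met": case["first_response_min"] <= 60,
        "resolution_days": case["resolution_days"],
        # Detected by the agent; decided by the manager.
        "exclusion_candidate": case.get("transferred_in", False),
        "exclusion_confirmed": None,
    }

rec = scribe_pass({"first_response_min": 45, "resolution_days": 3,
                   "transferred_in": True}, "case_close")
```

Because every pass writes standing as of the last trigger, the performance dashboard reads a continuous signal rather than waiting for the quarterly batch.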

The dashboard's technical requirements:

  • A CRM integration layer capable of surfacing a custom interface within the case record
  • Three RAG pipelines with vector database infrastructure, ingestion pipelines per knowledge source, and low-latency retrieval during active cases
  • An orchestration layer managing three separate agents and their communication protocol
  • A real-time speech-to-text integration for call transcription

On the organizational side: director-level CRM approval, alignment with Support Insights on the performance methodology, knowledge base governance agreements with Technical Writing, Product Management, and Support Knowledge Management, and a change management process for frontline reps and managers.

What the preceding case studies already established: the organizational relationships, the proven performance methodology, the structured documentation practices, and the demonstrated willingness of peer managers to adopt and adapt. The prior work does not build the dashboard. But it builds the conditions in which the dashboard is a coherent next step rather than a speculative one.

67% Ramp Time Reduction
95% Initial Response SLA
5 min Average Speed of Answer
$0M+ Enterprise ARR Protected
Same week Quarterly Review Delivery
0–40 New Hires Reached
THE OPERATOR

Kevin Dumpit

Support Operations Leader · Oracle NetSuite

I lead support teams the way a systems engineer approaches infrastructure: find the structural failure, design the fix, measure it, and build the next layer on top of what worked. AI is the tool. We are the equipment.

Mississauga, Ontario
THE STACK
Frontier Models
Claude Sonnet 4.6 · Kimi K2.5 · Gemini
Local & Secure
Ollama · LM Studio · Qwen 3.5 9B
Dev Environment
Antigravity IDE · Claude Code
[PLACEHOLDER_CLIMBING_IMAGE]
THE PHYSICAL PUZZLE

A problem you solve after falling off the wall a hundred times.

[PLACEHOLDER_DIVING_IMAGE]
THE QUIET STATE

When the gear and I are perfectly in sync — it all just becomes quiet.

[PLACEHOLDER_DRONES_IMAGE]
THE FEEDBACK LOOP

Highest exhilaration-per-mAh. Off to do powerloops and Split-Ss.

CONTACT

Available for the right conversation.

Support Management · Operations & Process Improvement · AI Strategy & Transformation

AI is the Tool. We are the Equipment.